Results 1 - 20 of 23
1.
Opt Express ; 32(5): 7404-7416, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38439421

ABSTRACT

Structured beams carrying topological defects, namely phase and Stokes singularities, have gained extensive interest in numerous areas of optics. The non-separable spin and orbital angular momentum states of hybridly polarized Stokes singular beams provide additional freedom for manipulating optical fields. However, the characterization of hybridly polarized Stokes vortex beams remains challenging owing to the degeneracy associated with the complex polarization structures of these beams. In addition, experimental noise factors such as relative phase, amplitude, and polarization difference, together with beam fluctuations, further complicate the identification process. Here, we present a generalized diffraction-based Stokes polarimetry approach assisted by deep learning for efficient identification of Stokes singular beams. A total of 15 classes of beams are considered based on the type of Stokes singularity and their associated mode indices. The resultant total and polarization component intensities of Stokes singular beams after diffraction through a triangular aperture are exploited by the deep neural network to recognize these beams. Our approach achieves a classification accuracy of 98.67% for 15 types of Stokes singular beams that comprise several degenerate cases. The present study illustrates the potential of diffraction of Stokes singular beams with polarization transformation, modeling of experimental noise factors, and a deep learning framework for characterizing hybridly polarized beams.
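
As a rough illustration of the classification step described above, the sketch below defines a small convolutional network that maps diffraction intensity patterns to 15 beam classes. The layer sizes, the two-channel input (total plus one polarization-component intensity) and all hyper-parameters are assumptions for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class StokesBeamClassifier(nn.Module):
    """Small CNN mapping diffraction intensity patterns to 15 beam classes."""
    def __init__(self, in_channels=2, n_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example forward pass on a random batch standing in for simulated intensity patterns.
model = StokesBeamClassifier()
logits = model(torch.randn(8, 2, 128, 128))   # -> shape (8, 15)
```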

2.
Appl Opt ; 62(15): 3989-3999, 2023 May 20.
Article in English | MEDLINE | ID: mdl-37706710

ABSTRACT

Multispectral quantitative phase imaging (MS-QPI) is a high-contrast, label-free technique for morphological imaging of specimens. The aim of the present study is to extract spectrally dependent quantitative information in a single shot using a highly spatially sensitive digital holographic microscope assisted by a deep neural network. Three different wavelengths are used in our method: λ=532, 633, and 808 nm. The first step is to acquire the interferometric data for each wavelength. The acquired datasets are used to train a generative adversarial network to generate multispectral (MS) quantitative phase maps from a single input interferogram. The network was trained and validated on two different samples: an optical waveguide and MG63 osteosarcoma cells. Validation of the present approach is performed by comparing the predicted MS phase maps with numerically reconstructed (FT+TIE) phase maps and quantifying the agreement with different image quality assessment metrics.
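
The validation step described above can be illustrated with standard image-quality metrics. The sketch below compares a predicted phase map against a numerically reconstructed reference using SSIM and PSNR from scikit-image; the array names and the synthetic data are placeholders, not the study's data.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare_phase_maps(predicted, reference):
    """Return (SSIM, PSNR) between two phase maps given as float arrays."""
    data_range = reference.max() - reference.min()
    ssim = structural_similarity(reference, predicted, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, predicted, data_range=data_range)
    return ssim, psnr

# Synthetic stand-ins for a reconstructed and a network-predicted phase map.
rng = np.random.default_rng(0)
reference = rng.normal(size=(256, 256))
predicted = reference + 0.05 * rng.normal(size=(256, 256))
print(compare_phase_maps(predicted, reference))
```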


Subjects
Deep Learning, Holography, Interferometry, Neural Networks (Computer)
3.
Autophagy ; 19(10): 2769-2788, 2023 10.
Article in English | MEDLINE | ID: mdl-37405374

ABSTRACT

Mitochondria are susceptible to damage resulting from their activity as energy providers. Damaged mitochondria can cause harm to the cell and thus mitochondria are subjected to elaborate quality-control mechanisms including elimination via lysosomal degradation in a process termed mitophagy. Basal mitophagy is a house-keeping mechanism fine-tuning the number of mitochondria according to the metabolic state of the cell. However, the molecular mechanisms underlying basal mitophagy remain largely elusive. In this study, we visualized and assessed the level of mitophagy in H9c2 cardiomyoblasts at basal conditions and after OXPHOS induction by galactose adaptation. We used cells with a stable expression of a pH-sensitive fluorescent mitochondrial reporter and applied state-of-the-art imaging techniques and image analysis. Our data showed a significant increase in acidic mitochondria after galactose adaptation. Using a machine-learning approach we also demonstrated increased mitochondrial fragmentation by OXPHOS induction. Furthermore, super-resolution microscopy of live cells enabled capturing of mitochondrial fragments within lysosomes as well as dynamic transfer of mitochondrial contents to lysosomes. Applying correlative light and electron microscopy we revealed the ultrastructure of the acidic mitochondria confirming their proximity to the mitochondrial network, ER and lysosomes. Finally, exploiting siRNA knockdown strategy combined with flux perturbation with lysosomal inhibitors, we demonstrated the importance of both canonical as well as non-canonical autophagy mediators in lysosomal degradation of mitochondria after OXPHOS induction. Taken together, our high-resolution imaging approaches applied on H9c2 cells provide novel insights on mitophagy during physiologically relevant conditions. The implication of redundant underlying mechanisms highlights the fundamental importance of mitophagy.Abbreviations: ATG: autophagy related; ATG7: autophagy related 7; ATP: adenosine triphosphate; BafA1: bafilomycin A1; CLEM: correlative light and electron microscopy; EGFP: enhanced green fluorescent protein; MAP1LC3B: microtubule associated protein 1 light chain 3 beta; OXPHOS: oxidative phosphorylation; PepA: pepstatin A; PLA: proximity ligation assay; PRKN: parkin RBR E3 ubiquitin protein ligase; RAB5A: RAB5A, member RAS oncogene family; RAB7A: RAB7A, member RAS oncogene family; RAB9A: RAB9A, member RAS oncogene family; ROS: reactive oxygen species; SIM: structured illumination microscopy; siRNA: short interfering RNA; SYNJ2BP: synaptojanin 2 binding protein; TEM: transmission electron microscopy; TOMM20: translocase of outer mitochondrial membrane 20; ULK1: unc-51 like kinase 1.
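
As a hedged illustration of one analysis step mentioned above (quantifying mitochondrial fragmentation), the sketch below counts connected components in a binary mitochondrial mask with scikit-image. It does not reproduce the study's machine-learning pipeline; the mask source and connectivity choice are assumptions.

```python
import numpy as np
from skimage import measure

def fragmentation_stats(mito_mask):
    """mito_mask: 2-D boolean array of segmented mitochondria."""
    labels = measure.label(mito_mask, connectivity=2)
    props = measure.regionprops(labels)
    areas = np.array([p.area for p in props])
    return {"n_fragments": len(props),
            "mean_area_px": float(areas.mean()) if len(areas) else 0.0}

# Toy example: two separate "mitochondria" in a small mask.
mask = np.zeros((64, 64), dtype=bool)
mask[5:15, 5:30] = True
mask[40:44, 40:60] = True
print(fragmentation_stats(mask))
```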


Subjects
Autophagy, Mitophagy, Mitophagy/genetics, Galactose/metabolism, Mitochondria/metabolism, Mitochondrial Membranes/metabolism, Ubiquitin-Protein Ligases/metabolism
4.
J Vis Exp ; (193)2023 03 03.
Article in English | MEDLINE | ID: mdl-36939264

ABSTRACT

The quantitative analysis of subcellular organelles such as mitochondria in cell fluorescence microscopy images is a demanding task because of the inherent challenges in the segmentation of these small and morphologically diverse structures. In this article, we demonstrate the use of a machine learning-aided segmentation and analysis pipeline for the quantification of mitochondrial morphology in fluorescence microscopy images of fixed cells. The deep learning-based segmentation tool is trained on simulated images and eliminates the requirement for ground truth annotations for supervised deep learning. We demonstrate the utility of this tool on fluorescence microscopy images of fixed cardiomyoblasts with a stable expression of fluorescent mitochondria markers and employ specific cell culture conditions to induce changes in the mitochondrial morphology.
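
The simulation-for-training idea can be sketched as below: synthetic, elongated mitochondria-like objects are drawn with known ground-truth masks, then blurred and corrupted with noise, so no manual annotation is required. The shape statistics, blur and noise levels are illustrative assumptions, not the authors' simulator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_training_pair(shape=(256, 256), n_objects=20, seed=0):
    """Return (synthetic image, ground-truth mask) for supervised training."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_objects):
        r, c = rng.integers(10, shape[0] - 10), rng.integers(10, shape[1] - 10)
        length, width = rng.integers(5, 25), rng.integers(2, 5)
        mask[r:r + width, c:c + length] = True       # elongated "mitochondrion"
    image = gaussian_filter(mask.astype(float), sigma=2)   # emulate optical blur
    image += rng.normal(0, 0.05, shape)                    # camera noise
    return image, mask

img, gt = simulate_training_pair()
```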


Assuntos
Processamento de Imagem Assistida por Computador , Aprendizado de Máquina , Processamento de Imagem Assistida por Computador/métodos , Microscopia de Fluorescência , Mitocôndrias , Aprendizado de Máquina Supervisionado
5.
Biomed Opt Express ; 13(10): 5495-5516, 2022 Oct 01.
Article in English | MEDLINE | ID: mdl-36425635

ABSTRACT

Mitochondria play a crucial role in cellular metabolism. This paper presents a novel method to visualize mitochondria in living cells without the use of fluorescent markers. We propose a physics-guided deep learning approach for obtaining virtually labeled micrographs of mitochondria from bright-field images. We integrate a microscope's point spread function into the learning of an adversarial neural network to improve virtual labeling. We show results (average Pearson correlation 0.86) significantly better than the previous state of the art (0.71) for virtual labeling of mitochondria. We also provide new insights into the virtual labeling problem and suggest additional metrics for quality assessment. The results show that our virtual labeling approach is a powerful way of segmenting and tracking individual mitochondria in bright-field images, results previously achievable only for fluorescently labeled mitochondria.
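
A minimal sketch of the evaluation metric quoted above (Pearson correlation between the virtually labeled image and the real fluorescence image) follows; the synthetic arrays merely stand in for predicted and target micrographs.

```python
import numpy as np

def pearson_correlation(predicted, target):
    """Pearson correlation between two images of the same shape."""
    p, t = predicted.ravel().astype(float), target.ravel().astype(float)
    p -= p.mean(); t -= t.mean()
    return float((p @ t) / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12))

rng = np.random.default_rng(1)
target = rng.random((512, 512))                      # real fluorescence image
predicted = 0.9 * target + 0.1 * rng.random((512, 512))   # virtually labeled image
print(pearson_correlation(predicted, target))        # close to 1 for a good model
```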

6.
Biomed Opt Express ; 12(1): 191-210, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33659075

ABSTRACT

Image denoising or artefact removal using deep learning is possible when a supervised training dataset is available, either acquired in real experiments or synthesized using known noise models. Neither condition can be fulfilled for nanoscopy (super-resolution optical microscopy) images, which are generated from microscopy videos through statistical analysis techniques. Owing to several physical constraints, a supervised dataset cannot be measured. Further, the data undergo non-linear spatio-temporal mixing, and the valuable statistics of fluctuations from fluorescent molecules compete with the noise statistics. Therefore, noise or artefact models in nanoscopy images cannot be explicitly learned. Here, we propose a robust and versatile simulation-supervised training approach for deep learning auto-encoder architectures applied to the highly challenging nanoscopy images of sub-cellular structures inside biological samples. We show a proof of concept for one nanoscopy method and investigate the scope of generalizability across structures and nanoscopy algorithms not included during simulation-supervised training. We also investigate a variety of loss functions and learning models and discuss the limitations of existing performance metrics for nanoscopy images. We generate valuable insights for this highly challenging and unsolved problem in nanoscopy and set the foundation for the application of deep learning to nanoscopy problems in the life sciences.
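
A minimal convolutional auto-encoder in the spirit of simulation-supervised denoising is sketched below: it would be trained on pairs of simulated noisy and simulated clean nanoscopy patches. The layer sizes and the MSE loss are assumptions, not the architectures or loss functions studied in the paper.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Small convolutional auto-encoder for (noisy -> clean) patch mapping."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
noisy = torch.randn(4, 1, 128, 128)         # simulated noisy nanoscopy patches
clean = torch.randn(4, 1, 128, 128)         # corresponding simulated clean patches
loss = nn.functional.mse_loss(model(noisy), clean)   # one simulation-supervised step
```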

7.
Opt Express ; 28(25): 37199-37208, 2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33379558

ABSTRACT

High-resolution microscopy is heavily dependent on superb optical elements, and superresolution microscopy even more so. Correcting unavoidable optical aberrations during post-processing is an elegant method to reduce the optical system's complexity. A prime method that promises superresolution, aberration correction, and quantitative phase imaging is Fourier ptychography. This microscopy technique combines many images of the sample, recorded at differing illumination angles akin to computed tomography, and uses error minimisation between the recorded images and those generated by a forward model. The more precisely those illumination angles are known to the image-formation forward model, the better the result. Therefore, illumination estimation from the raw data is an important step that supports correct phase recovery and aberration correction. Here, we derive how illumination estimation can be cast as an object detection problem that permits the use of a fast convolutional neural network (CNN) for this task. We find that Faster R-CNN delivers highly robust results and outperforms classical approaches by far, with up to a 3-fold reduction in estimation errors. Intriguingly, we find that conventionally beneficial smoothing and filtering of the raw data is counterproductive in this type of application. We present a detailed analysis of the network's performance and provide all our developed software openly.
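
The casting of illumination estimation as object detection can be sketched with an off-the-shelf detector, as below: a Faster R-CNN localizes the relevant feature (for example, the illumination spot) in each raw image, and the detected box centre would then be mapped to an illumination angle via a system calibration. The class count, the untrained weights and the centre-to-angle mapping are assumptions; the torchvision keyword arguments shown may differ between library versions.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Background + one "illumination spot" class; no pretrained weights are downloaded.
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)
model.eval()

raw_image = torch.rand(3, 256, 256)                 # stand-in raw Fourier-ptychography frame
with torch.no_grad():
    detections = model([raw_image])[0]              # dict with boxes, labels, scores

if len(detections["boxes"]):
    x0, y0, x1, y1 = detections["boxes"][0].tolist()
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2           # spot centre in pixels
    # cx, cy would be converted to illumination angles using a system calibration.
```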

8.
Opt Express ; 28(24): 36229-36244, 2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33379722

ABSTRACT

Quantitative phase microscopy (QPM) is a label-free technique that enables monitoring of morphological changes at the subcellular level. The performance of a QPM system in terms of spatial sensitivity and resolution depends on the coherence properties of the light source and the numerical aperture (NA) of the objective lenses. Here, we propose high space-bandwidth quantitative phase imaging using partially spatially coherent digital holographic microscopy (PSC-DHM) assisted by a deep neural network. The PSC source is synthesized to improve the spatial sensitivity of the phase map reconstructed from the interferometric images. Further, a compatible generative adversarial network (GAN) is trained with paired low-resolution (LR) and high-resolution (HR) datasets acquired from the PSC-DHM system. The training of the network is performed on two different types of samples, i.e., mostly homogeneous human red blood cells (RBCs) and highly heterogeneous macrophages. The performance is evaluated by predicting HR images from datasets captured with a low-NA lens and comparing them with the actual HR phase images. An improvement of 9× in the space-bandwidth product is demonstrated for both the RBC and macrophage datasets. We believe that the PSC-DHM + GAN approach will be applicable to single-shot label-free tissue imaging, disease classification and other high-resolution tomography applications by utilizing the longitudinal spatial coherence properties of the light source.
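
The reported 9× gain in space-bandwidth product (SBP) can be illustrated with a back-of-the-envelope calculation, since SBP scales as the field of view divided by the resolvable spot area; the field-of-view and resolution numbers below are illustrative assumptions, not the paper's values.

```python
def space_bandwidth_product(fov_um, resolution_um):
    """SBP ~ (FOV_x / dx) * (FOV_y / dy); both arguments are (x, y) tuples in micrometres."""
    return (fov_um[0] / resolution_um[0]) * (fov_um[1] / resolution_um[1])

sbp_low  = space_bandwidth_product(fov_um=(200, 200), resolution_um=(1.5, 1.5))
sbp_high = space_bandwidth_product(fov_um=(200, 200), resolution_um=(0.5, 0.5))
print(sbp_high / sbp_low)   # 3x finer resolution in each axis gives a 9x SBP gain
```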


Subjects
Erythrocytes/cytology, Holography/methods, Computer-Assisted Image Interpretation/methods, Computer-Assisted Image Processing/methods, Macrophages/cytology, Phase-Contrast Microscopy/methods, Neural Networks (Computer), Humans
9.
Biomed Opt Express ; 11(9): 5017-5031, 2020 Sep 01.
Article in English | MEDLINE | ID: mdl-33014597

ABSTRACT

Optical coherence tomography (OCT) is being increasingly adopted as a label-free and non-invasive technique for biomedical applications such as cancer and ocular disease diagnosis. Diagnostic information for these tissues is manifest in the textural and geometric features of the OCT images, which human experts use to interpret and triage. However, this workflow suffers delays owing to the lengthy conventional diagnostic procedure and a shortage of human expertise. Here, a custom deep learning architecture, LightOCT, is proposed for the classification of OCT images into diagnostically relevant classes. LightOCT is a convolutional neural network with only two convolutional layers and a fully connected layer, yet it is shown to provide excellent training and test results for diverse OCT image datasets. We show that LightOCT provides 98.9% accuracy in classifying 44 normal and 44 malignant (invasive ductal carcinoma) breast tissue volumetric OCT images. It also achieves >96% accuracy in classifying public datasets of ocular OCT images as normal, age-related macular degeneration and diabetic macular edema. Additionally, we show ∼96% test accuracy for classifying retinal images as belonging to choroidal neovascularization, diabetic macular edema, drusen, and normal samples on a large public dataset of more than 100,000 images. The performance of the architecture is compared with transfer-learning-based deep neural networks. Through this, we show that LightOCT can provide significant diagnostic support for a variety of OCT images with sufficient training and minimal hyper-parameter tuning. The trained LightOCT networks for the three classification problems will be released online to support transfer learning on other datasets.
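
The described architecture (two convolutional layers followed by a single fully connected layer) can be sketched as below. The filter counts, kernel sizes and input size are assumptions; the published LightOCT hyper-parameters may differ.

```python
import torch
import torch.nn as nn

class LightOCTLike(nn.Module):
    """Two conv layers plus one fully connected layer, in the spirit of LightOCT."""
    def __init__(self, n_classes=3, in_size=224):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * (in_size // 4) * (in_size // 4), n_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

model = LightOCTLike(n_classes=3)
scores = model(torch.randn(2, 1, 224, 224))   # e.g. normal / AMD / DME logits
```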

10.
Sci Rep ; 10(1): 13118, 2020 08 04.
Article in English | MEDLINE | ID: mdl-32753627

ABSTRACT

Sperm cell motility and morphology observed under bright-field microscopy are the only criteria for selecting a particular sperm cell during the Intracytoplasmic Sperm Injection (ICSI) procedure of Assisted Reproductive Technology (ART). Several factors, such as oxidative stress, cryopreservation, heat, smoking and alcohol consumption, are negatively associated with sperm cell quality and fertilization potential because they alter subcellular structures and functions that are otherwise overlooked. However, bright-field imaging contrast is insufficient to distinguish the tiniest morphological features that might influence a sperm cell's fertilizing ability. We developed a partially spatially coherent digital holographic microscope (PSC-DHM) for quantitative phase imaging (QPI) in order to distinguish normal sperm cells from sperm cells under different stress conditions such as cryopreservation and exposure to hydrogen peroxide or ethanol. Phase maps of a total of 10,163 sperm cells (2,400 control cells, 2,750 spermatozoa after cryopreservation, and 2,515 and 2,498 cells under hydrogen peroxide and ethanol, respectively) are reconstructed using the data acquired from the PSC-DHM system. A total of seven feedforward deep neural networks (DNNs) are employed to classify the phase maps of normal and stress-affected sperm cells. When validated against the test dataset, the DNNs provided an average sensitivity, specificity and accuracy of 85.5%, 94.7% and 85.6%, respectively. The current QPI + DNN framework is applicable to further improving the ICSI procedure, to classifying semen quality with regard to fertilization potential, and to other biomedical applications in general.
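
The reported evaluation can be illustrated by computing sensitivity, specificity and accuracy from a confusion matrix for a binary normal-versus-stressed split, as sketched below with synthetic placeholder labels.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = stress-affected, 0 = normal
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 1])   # classifier output (placeholder)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(sensitivity, specificity, accuracy)
```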


Subjects
Deep Learning, Computer-Assisted Image Processing/methods, Microscopy, Oxidative Stress, Signal-to-Noise Ratio, Spermatozoa/cytology, Spermatozoa/metabolism, Cryopreservation, Ethanol/pharmacology, Humans, Hydrogen Peroxide/pharmacology, Male, Oxidative Stress/drug effects, Spermatozoa/drug effects
11.
Sensors (Basel) ; 19(19)2019 Sep 28.
Article in English | MEDLINE | ID: mdl-31569337

ABSTRACT

Ultrasound-based structural health monitoring of piezoelectric materials is challenging if damage evolves at the microscale over time. Classifying geometrically similar damages with a difference in diameter as small as 100 µm is difficult using conventional sensing and signal analysis approaches. Here, we use an unconventional ultrasound sensing approach that collects information from the entire bulk of the material and investigate the applicability of machine learning approaches for classifying such similar defects. Our results show that appropriate feature design combined with a simple k-nearest neighbor classifier can provide up to 98% classification accuracy, whereas conventional features for time-series data combined with a variety of classifiers cannot achieve even 70% accuracy. The newly proposed hybrid feature, which combines frequency-domain information in the form of the power spectral density and time-domain information in the form of the sign of slope change, is a suitable feature for achieving the best classification accuracy on this challenging problem.
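
A hedged sketch of the hybrid feature and classifier described above follows: a power spectral density (frequency domain) is concatenated with a sign-of-slope-change count (time domain) and fed to a k-nearest-neighbour classifier. The window length, feature size, k and the synthetic signals are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

def hybrid_feature(signal, fs=1e6, n_psd=64):
    """Concatenate a truncated PSD with a count of slope-sign changes."""
    _, psd = welch(signal, fs=fs, nperseg=2 * n_psd)
    slope_sign_changes = np.sum(np.diff(np.sign(np.diff(signal))) != 0)
    return np.concatenate([psd[:n_psd], [slope_sign_changes]])

rng = np.random.default_rng(0)
X = np.array([hybrid_feature(rng.normal(size=2048)) for _ in range(40)])
y = np.repeat([0, 1], 20)                       # two synthetic damage classes
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(X[:2]))
```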

12.
Sci Rep ; 8(1): 4988, 2018 03 21.
Article in English | MEDLINE | ID: mdl-29563529

ABSTRACT

Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations from all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically rid localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated on five datasets of in vitro and fixed-cell samples.

13.
Sci Rep ; 7(1): 4445, 2017 06 30.
Article in English | MEDLINE | ID: mdl-28667336

ABSTRACT

This paper presents an eigen-analysis of image stacks of blinking fluorophores to identify the components that enable super-resolved imaging of blinking fluorophores. The eigen-analysis reveals that the contributions of the spatial distribution of fluorophores and their temporal photon emission characteristics can be completely separated. While the cross-emitter, cross-pixel information of the spatial distribution that permits super-resolution is encoded in two matrices, temporal statistics weigh the contribution of these matrices to the measured data. The properties of these matrices and the conditions for exploiting them are investigated. Contemporary super-resolution imaging methods that use blinking for super-resolution are studied in the context of the presented analysis. Besides providing insight into the capabilities and limitations of existing super-resolution methods, the analysis should help in designing better super-resolution techniques that directly exploit these matrices.
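
The separation of spatial and temporal contributions can be illustrated generically with an SVD of the reshaped image stack, as below: spatial patterns appear in the right singular vectors (eigenimages) and temporal blinking statistics in the left singular vectors and singular values. This is a generic decomposition sketch, not the paper's full derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 200, 32, 32
stack = rng.poisson(5.0, size=(T, H, W)).astype(float)   # synthetic blinking data

M = stack.reshape(T, H * W)                  # rows: time, columns: pixels
U, s, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
temporal_modes = U * s                       # per-frame weight of each component
spatial_modes = Vt.reshape(-1, H, W)         # eigenimages over the field of view
```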

14.
J Opt Soc Am A Opt Image Sci Vis ; 33(12): 2491-2500, 2016 Dec 01.
Article in English | MEDLINE | ID: mdl-27906276

ABSTRACT

This paper addresses the problem of horizon detection, a fundamental process in numerous object detection algorithms, in a maritime environment. The maritime environment is characterized by the absence of fixed features, the presence of numerous linear features in dynamically changing objects and background, and constantly varying illumination, rendering the typically simple problem of detecting the horizon a challenging one. We present a novel method called multi-scale consistence of weighted edge Radon transform, abbreviated as MuSCoWERT. It detects long linear features that are consistent over multiple scales by applying multi-scale median filtering to the image, followed by a Radon transform on a weighted edge map, and computing the histogram of the detected linear features. We show that MuSCoWERT has excellent performance, better than seven other contemporary methods, on 84 challenging maritime videos containing over 33,000 frames and captured using visible-range and near-infrared-range sensors mounted onboard, onshore, or on floating buoys. It has a median error of about 2 pixels (less than 0.2%) from the center of the actual horizon and a median angular error of less than 0.4 deg. We are also sharing a new, challenging horizon detection dataset of 65 videos from visible and infrared cameras with onshore and onboard ship camera placements.
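
A simplified sketch of the pipeline follows: multi-scale median smoothing, an edge-magnitude map, and a Radon transform whose strongest peak indicates the dominant long linear feature at each scale. The scales, the edge operator and the peak-picking rule are simplifications of MuSCoWERT, not a faithful reimplementation.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import sobel
from skimage.transform import radon

def horizon_candidates(gray, scales=(3, 7, 11)):
    """Return one (offset_index, angle_deg) candidate line per smoothing scale."""
    theta = np.linspace(0.0, 180.0, 181)
    candidates = []
    for s in scales:
        edges = sobel(median_filter(gray, size=s))          # weighted edge map
        sinogram = radon(edges, theta=theta, circle=False)
        r_idx, t_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
        candidates.append((int(r_idx), float(theta[t_idx])))
    return candidates   # candidates consistent across scales suggest the horizon

gray = np.random.rand(120, 160)        # stand-in for a grayscale maritime frame
print(horizon_candidates(gray))
```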

16.
Sensors (Basel) ; 16(3)2016 Mar 22.
Article in English | MEDLINE | ID: mdl-27011185

ABSTRACT

We propose a method for classifying radiometric oceanic color data measured by hyperspectral satellite sensors into known spectral classes, irrespective of the downwelling irradiance of the particular day, i.e., the illumination conditions. The focus is not on retrieving the inherent optical properties but on classifying the pixels according to the known spectral classes of the reflectances from the ocean. The method compensates for the unknown downwelling irradiance by white balancing the radiometric data at the ocean pixels using the radiometric data of bright pixels (typically from clouds). The white-balanced data are compared with the entries in a pre-calibrated lookup table in which each entry represents the spectral properties of one class. The proposed approach is tested on two datasets of in situ measurements and 26 different daylight illumination spectra for the medium resolution imaging spectrometer (MERIS), moderate-resolution imaging spectroradiometer (MODIS), sea-viewing wide field-of-view sensor (SeaWiFS), coastal zone color scanner (CZCS), ocean and land colour instrument (OLCI), and visible infrared imaging radiometer suite (VIIRS) sensors. Results are also shown for CIMEL's SeaPRISM sun photometer sensor used on board during field trips. An accuracy of more than 92% is observed on the validation dataset and more than 86% on the other dataset for all satellite sensors. The potential of applying the algorithms to non-satellite and non-multi-spectral sensors mountable on airborne systems is demonstrated by showing classification results for two consumer cameras. Classification on actual MERIS data is also shown. Additional results comparing the spectra of remote sensing reflectance with level 2 MERIS data and chlorophyll concentration estimates of the data are included.
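
The classification idea can be sketched as below: each ocean-pixel spectrum is white-balanced by a bright (cloud) pixel spectrum to cancel the unknown downwelling irradiance and then assigned the class of the nearest lookup-table entry under a spectral-angle distance. The lookup table and the band count are synthetic placeholders.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle between two spectra, insensitive to overall scaling."""
    return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1, 1))

def classify_pixel(ocean_radiance, bright_radiance, lut):
    balanced = ocean_radiance / np.maximum(bright_radiance, 1e-9)   # white balance
    return int(np.argmin([spectral_angle(balanced, ref) for ref in lut]))

rng = np.random.default_rng(0)
lut = rng.random((5, 8))                     # 5 classes x 8 spectral bands
bright = rng.random(8) + 1.0                 # cloud-pixel radiance
ocean = lut[2] * bright                      # a pixel belonging to class 2
print(classify_pixel(ocean, bright, lut))    # -> 2
```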

17.
J Opt Soc Am A Opt Image Sci Vis ; 32(7): 1390-402, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26367171

ABSTRACT

This paper presents metrics and statistics of the frequency of occurrence of metamerism in three consumer cameras, viz., the Canon 1D Mark III, Nikon D40, and Sony α7, using spectral and RGB images of natural scenes. Both sensor metamerism and observer metamerism of the cameras' sensors are studied. We use the concept of dissimilarity of two spectral power distributions in the spectral domain and the RGB domain for studying the occurrence of sensor metamerism. Specifically, we use angular difference and digital equivalence approaches for this purpose. For studying the occurrence of observer metamerism, we use the weighted Nimeroff's index for dissimilarity in the spectral domain with respect to the CIE color space, along with the conventionally used CIE LAB color difference for dissimilarity in the CIE color space. The statistics of the frequency of occurrence of metamerism are generated on a dataset of 423 spectral images of indoor scenes under 5 illumination conditions and outdoor scenes under natural illumination conditions. Experiments show that about 18%-22% of the pixels in the images are metameric in the sense of angular difference. It is also observed that 1%-4% of the colors that would have appeared similar to human eyes are reproduced as distinct colors by the cameras. The dataset and details can be found at https://sites.google.com/site/dilipprasad/source-codes.
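
The angular-difference test for sensor metamerism can be sketched as below: two spectra are flagged as metameric for a camera when their spectral power distributions differ clearly while the RGB responses they produce are nearly identical. The sensitivities, thresholds and spectra are illustrative assumptions.

```python
import numpy as np

def angular_difference(u, v):
    """Angle in degrees between two vectors (spectra or RGB responses)."""
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def is_sensor_metameric(spd1, spd2, sensitivities, spd_thresh=5.0, rgb_thresh=0.5):
    rgb1, rgb2 = sensitivities @ spd1, sensitivities @ spd2   # (3, N) @ (N,)
    return (angular_difference(spd1, spd2) > spd_thresh and
            angular_difference(rgb1, rgb2) < rgb_thresh)

rng = np.random.default_rng(0)
S = rng.random((3, 31))             # stand-in camera sensitivities, 400-700 nm
spd_a = rng.random(31)
spd_b = spd_a + 0.2 * rng.random(31)
print(is_sensor_metameric(spd_a, spd_b, S))
```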

18.
Appl Opt ; 54(20): 6146-54, 2015 Jul 10.
Article in English | MEDLINE | ID: mdl-26193386

ABSTRACT

This paper discusses the design of an additional spectral filter (i.e., a fourth channel) to be used with existing camera sensors such that the camera's modified color gamut covers almost the full CIE XYZ color gamut. The proposed approach leverages the matrix-R theory, which states that the space of metamerism of a sensor, known as the metameric black space, can be determined directly from the camera's spectral sensitivities. Using this metameric black space, a novel fourth channel is designed for the sensor that can expand the camera's gamut. The effectiveness of this idea is demonstrated for five commercial cameras, Munsell color chips, and images taken under various illuminations. It is shown that the designed fourth channel is very effective in fitting the cameras' color gamuts to the CIE XYZ color gamut, reducing CIE LAB colorimetric distances as well as the color differences between the cameras' XYZ images and the true CIE XYZ images.
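
The matrix-R idea referenced above can be sketched briefly: with the camera's spectral sensitivities stacked as rows of S, the projector R = S⁺S maps a spectrum onto its fundamental component, and (I - R) projects onto the metameric black space, i.e., the spectral variations the camera cannot see. The spectral sampling and the random sensitivities below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((3, 31))                      # 3 channels x 31 wavelength samples
R = np.linalg.pinv(S) @ S                    # projector onto the fundamental space
black_projector = np.eye(31) - R             # projector onto the metameric black space

spd = rng.random(31)
black_component = black_projector @ spd
print(np.allclose(S @ black_component, 0, atol=1e-10))   # camera response is ~0
```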

19.
IEEE Trans Image Process ; 23(10): 4297-310, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25122569

ABSTRACT

We propose a simple and fast method for tangent estimation of digital curves. This geometric method uses a small local region for tangent estimation and has a definite upper-bound error for continuous as well as digital conics, i.e., circles, ellipses, parabolas, and hyperbolas. Explicit expressions of the upper bounds for continuous and digitized curves are derived, which can also be applied to nonconic curves. Our approach is benchmarked against 72 contemporary tangent estimation methods and demonstrates good performance for conic, nonconic, and noisy curves. In addition, we demonstrate good multigrid and isotropic performance, low computational complexity of O(1), and better performance than most methods in terms of maximum and average errors in tangent computation for a large variety of digital curves.
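
A minimal chord-based sketch of local tangent estimation illustrates the idea of using a small local region: the tangent direction at a point is approximated by the chord joining points k samples behind and ahead of it. This is not the paper's exact estimator and does not reproduce its error bound.

```python
import numpy as np

def chord_tangents(points, k=3):
    """points: (N, 2) array of ordered curve samples. Returns tangent angles in radians."""
    n = len(points)
    angles = np.empty(n)
    for i in range(n):
        p0, p1 = points[max(i - k, 0)], points[min(i + k, n - 1)]
        angles[i] = np.arctan2(p1[1] - p0[1], p1[0] - p0[0])
    return angles

t = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)    # digital circle samples
print(chord_tangents(circle)[:5])                    # ~pi/2 near the start of the circle
```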

20.
J Opt Soc Am A Opt Image Sci Vis ; 31(5): 1049-58, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24979637

ABSTRACT

Color constancy is a well-studied topic in color vision. Methods are generally categorized as (1) low-level statistical methods, (2) gamut-based methods, and (3) learning-based methods. In this work, we distinguish methods depending on whether they work directly from color values (i.e., color domain) or from values obtained from the image's spatial information (e.g., image gradients/frequencies). We show that spatial information does not provide any additional information that cannot be obtained directly from the color distribution and that the indirect aim of spatial-domain methods is to obtain large color differences for estimating the illumination direction. This finding allows us to develop a simple and efficient illumination estimation method that chooses bright and dark pixels using a projection distance in the color distribution and then applies principal component analysis to estimate the illumination direction. Our method gives state-of-the-art results on existing public color constancy datasets as well as on our newly collected dataset (NUS dataset) containing 1736 images from eight different high-end consumer cameras.
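
The method summarized above can be condensed into a short sketch: select the darkest and brightest pixels along the mean-color projection and take the first principal direction of the selected colors as the illumination estimate. The selection fraction and the synthetic scene are assumptions.

```python
import numpy as np

def estimate_illuminant(rgb_pixels, fraction=0.035):
    """rgb_pixels: (N, 3) array of linear RGB values. Returns a unit RGB direction."""
    mean_dir = rgb_pixels.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    proj = rgb_pixels @ mean_dir                    # projection distance per pixel
    n = max(1, int(fraction * len(proj)))
    idx = np.argsort(proj)
    selected = rgb_pixels[np.concatenate([idx[:n], idx[-n:]])]   # dark + bright pixels
    _, _, vt = np.linalg.svd(selected, full_matrices=False)
    illum = np.abs(vt[0])                           # first principal direction
    return illum / np.linalg.norm(illum)

rng = np.random.default_rng(0)
pixels = rng.random((10000, 3)) * np.array([1.0, 0.8, 0.6])   # scene under a warm light
print(estimate_illuminant(pixels))
```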
